Training Large-Vocabulary Neural Language Models by Private Federated Learning for Resource-Constrained Devices
Federated Learning (FL) is a technique to train models using data distributed
across devices. Differential Privacy (DP) provides a formal privacy guarantee
for sensitive data. Our goal is to train a large neural network language model
(NNLM) on compute-constrained devices while preserving privacy using FL and DP.
However, the DP noise added to the model grows with model size, which often
prevents convergence. We propose Partial Embedding Updates (PEU), a novel
technique that reduces noise by reducing the payload size.
Furthermore, we adopt Low-Rank Adaptation (LoRA) and Noise Contrastive
Estimation (NCE) to reduce the memory demands of large models on
compute-constrained devices. This combination of techniques makes it possible
to train large-vocabulary language models while preserving accuracy and
privacy.
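The abstract does not give implementation details, but the LoRA component it names is well documented elsewhere. As a rough illustration only, the sketch below shows a linear layer with a frozen base weight and trainable low-rank factors, so that only the small factors need to be trained (and sent as the federated payload). PyTorch is assumed; the `LoRALinear` class, rank, and scaling values are illustrative and not the paper's configuration.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Linear layer with a frozen base weight plus a trainable low-rank update.

    Only the factors A and B are trained, shrinking the per-round update from
    out_features * in_features parameters to rank * (in_features + out_features).
    """

    def __init__(self, in_features: int, out_features: int,
                 rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad = False  # pretrained weight stays frozen

        # Effective weight: W' = W + (alpha / rank) * B @ A
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


if __name__ == "__main__":
    layer = LoRALinear(in_features=512, out_features=512, rank=8)
    x = torch.randn(4, 512)
    print(layer(x).shape)  # torch.Size([4, 512])
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(f"trainable params: {trainable}")  # 8192, vs. 262144 for the full layer
```

In a federated setting such as the one described above, only the low-rank (and, per the paper, partial-embedding) updates would leave the device, which is what reduces both the payload and the scale of DP noise relative to the full model; the exact mechanism used in the paper is not shown here.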